Multi-Layer Perceptron Training Optimization Using Nature Inspired Computing
Authors
Abstract
Although multi-layer perceptron (MLP) neural networks provide a lot of flexibility and have proven useful and reliable in a wide range of classification and regression problems, they still have limitations. One of the most common is associated with the optimization algorithm used to train them. The most commonly used training method is stochastic gradient descent backpropagation (or BP for short), because it is mathematically tractable, given that the activation functions are differentiable. However, BP is not guaranteed to find the globally optimal set of weights and biases, and as a result the MLP is often incapable of obtaining a desirable solution to the problem. Clonal selection algorithms (CSA) are procedures that can effectively explore a complex, large search space for values near the global optimum. Consequently, CSA can be used to address this problem in MLP networks. This paper presents a novel implementation of CSA for optimizing the weights and biases of MLP architectures on real-world problems such as breast cancer diagnosis, active sonar target classification, and wheat seed and flower classification; optimizing the weights and biases in this way significantly increases the accuracy of the MLP. The performance of the proposed approach is compared with other popular training methods: the genetic algorithm (GA), ant colony optimization (ACO), particle swarm optimization (PSO), Harris hawks optimization (HHO), moth-flame optimization (MFO), the flower pollination algorithm (FPA), and backpropagation (BP). The comparison is benchmarked using five datasets: Iris Flower, Sonar, Wheat Seeds, Breast Cancer Wisconsin, and Haberman's Survival. The comparative study results illustrate the improvements gained by CSA over the other methods; hence, CSA can be considered competitive for solving classification problems in applications from various disciplines.
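As a rough illustration of the training scheme described in the abstract, the sketch below evolves the flattened weight and bias vector of a one-hidden-layer MLP with a CLONALG-style clonal selection loop, using classification accuracy as the affinity measure. All function names, hyper-parameters, and the mutation schedule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: train an MLP by evolving its flat parameter vector with a
# CLONALG-style clonal selection loop (illustrative only, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(theta, X, n_in, n_hidden, n_out):
    """Decode a flat parameter vector into W1, b1, W2, b2 and run a forward pass."""
    i = 0
    W1 = theta[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = theta[i:i + n_hidden]; i += n_hidden
    W2 = theta[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = theta[i:i + n_out]
    H = np.tanh(X @ W1 + b1)            # hidden layer
    return H @ W2 + b2                  # output scores (argmax -> predicted class)

def fitness(theta, X, y, dims):
    pred = mlp_forward(theta, X, *dims).argmax(axis=1)
    return (pred == y).mean()           # classification accuracy as affinity

def clonal_selection_train(X, y, n_hidden=8, pop=30, gens=200,
                           n_select=10, clones_per=5, replace=5):
    n_in, n_out = X.shape[1], int(y.max()) + 1
    dims = (n_in, n_hidden, n_out)
    n_params = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out
    P = rng.normal(0, 1, size=(pop, n_params))            # antibody repertoire
    for _ in range(gens):
        f = np.array([fitness(t, X, y, dims) for t in P])
        best = P[np.argsort(f)[::-1][:n_select]]          # affinity-based selection
        clones = np.repeat(best, clones_per, axis=0)
        # hypermutation: lower-ranked clones mutate more strongly
        rank = np.repeat(np.arange(n_select), clones_per)
        scale = 0.05 + 0.5 * rank / max(n_select - 1, 1)
        clones += rng.normal(0, 1, clones.shape) * scale[:, None]
        pool = np.vstack([P, clones])
        pf = np.array([fitness(t, X, y, dims) for t in pool])
        P = pool[np.argsort(pf)[::-1][:pop]]              # keep the fittest antibodies
        P[-replace:] = rng.normal(0, 1, (replace, n_params))  # receptor editing
    f = np.array([fitness(t, X, y, dims) for t in P])
    return P[f.argmax()], f.max()

# Example usage (assumes scikit-learn is available):
# from sklearn.datasets import load_iris
# X, y = load_iris(return_X_y=True)
# theta, acc = clonal_selection_train((X - X.mean(0)) / X.std(0), y)
```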
Similar Resources
Improvement of Multi-Layer Perceptron (MLP) training using optimization algorithms
Artificial Neural Network (ANN) is one of the modern computational methods proposed to solve increasingly complex problems in the real world (Xie et al., 2006 and Chau, 2007). ANN is characterized by its pattern of connections between the neurons (called its architecture), its method of determining the weights on the connections (called its training, or learning, algorithm), and its activation ...
New full adders using multi-layer perceptron network
How to reconfigure a logic gate for a variety of functions is an interesting topic. In this paper, a different method of designing logic gates is proposed. Initially, due to the training ability of the multilayer perceptron neural network, it was used to create a new type of logic and full adder gates. In this method, the perceptron network was trained and then tested. This network was 100% ac...
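A toy sketch of the idea in this excerpt, assuming scikit-learn: fit a small MLP to the eight rows of the full-adder truth table so the trained network reproduces the sum and carry-out bits. The hidden-layer size, solver, and iteration count are arbitrary illustrative choices.

```python
# Illustrative sketch: learn the 3-input full adder with a small MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Inputs: (a, b, carry_in); targets: (sum, carry_out) for all 8 input combinations.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)])
Y = np.column_stack([X.sum(axis=1) % 2, (X.sum(axis=1) >= 2).astype(int)])

net = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X, Y)                 # multi-label fit: one output unit per adder output
print(net.predict(X))         # ideally reproduces the full-adder truth table
```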
Advances in Multi-Objective Nature Inspired Computing
Multiple Layer Perceptron training using genetic algorithms
Multiple Layer Perceptron networks trained with the backpropagation algorithm are very frequently used to solve a wide variety of real-world problems. Usually a gradient descent algorithm is used to adapt the weights based on a comparison between the desired and actual network response to a given input stimulus. All training pairs, each consisting of input vector and desired output vector, are form...
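For reference, the gradient-descent adaptation this excerpt describes amounts to nudging the weights against the gradient of the error between the desired and actual response; the minimal single-layer sketch below uses illustrative names, not the paper's code, and a genetic algorithm would replace this step with population-based, gradient-free search over the same weights.

```python
# One delta-rule update for a single linear output layer (no hidden layer).
import numpy as np

def sgd_step(W, x, target, lr=0.1):
    y = W @ x                      # actual network response
    error = y - target             # comparison of actual vs. desired output
    grad = np.outer(error, x)      # d(0.5 * ||error||^2) / dW
    return W - lr * grad           # move the weights against the gradient
```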
متن کاملImproving Particle Swarm Optimization Based on Neighborhood and Historical Memory for Training Multi-Layer Perceptron
Many optimization problems can be found in scientific and engineering fields. It is a challenge for researchers to design efficient algorithms to solve these optimization problems. The particle swarm optimization (PSO) algorithm, which is inspired by the social behavior of bird flocks, is a global stochastic method. However, a monotonic and static learning model, which is applied for all partic...
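For context, the standard PSO update that such MLP-training variants build on moves each particle (here, a flattened weight vector) toward its personal best and the swarm's global best; the sketch below uses typical textbook constants, not the paper's tuned settings.

```python
# Bare-bones PSO velocity/position update over a population of flat weight vectors.
import numpy as np

rng = np.random.default_rng(0)

def pso_update(positions, velocities, personal_best, global_best,
               w=0.7, c1=1.5, c2=1.5):
    r1 = rng.random(positions.shape)
    r2 = rng.random(positions.shape)
    velocities = (w * velocities
                  + c1 * r1 * (personal_best - positions)   # cognitive pull
                  + c2 * r2 * (global_best - positions))    # social pull
    return positions + velocities, velocities
```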
Journal
Journal Title: IEEE Access
Year: 2022
ISSN: 2169-3536
DOI: https://doi.org/10.1109/access.2022.3164669